Simultaneous Articulatory and Acoustic Distortion in L1 and L2 Listening: Locally Time-Reversed "Fast" Speech
Abstract
The current study explores how native and non-native speakers cope with simultaneous articulatory and acoustic distortion in speech perception. The articulatory distortion was generated by asking a speaker to articulate target speech as fast as possible (fast speech). The acoustic distortion was created by dividing the speech signal into short segments of equal duration (e.g., 50 ms) from the onset of speech, flipping each segment on the temporal axis, and concatenating the segments again (locally time-reversed speech). This study examined how intelligible “locally time-reversed fast speech” was compared with the “locally time-reversed normal speech” measured in Ishida, Samuel, and Arai (2016). Participants were native English speakers and native Japanese speakers who spoke English as a second language. They listened to English words and pseudowords containing many stop consonants, spoken fast and locally time-reversed with segment durations of 10, 20, 30, 40, 50, or 60 ms. In general, “locally time-reversed fast speech” became gradually less intelligible as the length of the reversed segments increased. Native speakers generally understood locally time-reversed fast-spoken words well but not pseudowords, whereas non-native speakers understood hardly any of the words or pseudowords. Language proficiency strongly supported the perceptual restoration of locally time-reversed fast speech.
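The local time reversal described above is a simple waveform manipulation, and a minimal sketch may help make it concrete. The snippet below assumes a mono recording loaded with NumPy and the soundfile library; the function name, the example file names, and the 50 ms setting are illustrative and not taken from the study's materials.

```python
import numpy as np
import soundfile as sf

def locally_time_reverse(signal, sample_rate, segment_ms):
    """Divide the waveform into fixed-length segments from its onset,
    reverse each segment in time, and concatenate them again."""
    segment_len = int(round(sample_rate * segment_ms / 1000.0))
    segments = [signal[i:i + segment_len] for i in range(0, len(signal), segment_len)]
    # The final segment may be shorter than segment_ms; it is simply reversed as-is.
    return np.concatenate([segment[::-1] for segment in segments])

# Hypothetical usage: create a 50 ms locally time-reversed version of one item.
# audio, fs = sf.read("fast_speech_item.wav")
# distorted = locally_time_reverse(audio, fs, segment_ms=50)
# sf.write("fast_speech_item_ltr50.wav", distorted, fs)
```

Because each segment is flipped in place, the total duration of the item is unchanged; only the fine temporal structure within each window is reversed, which is why intelligibility degrades gradually as the segment length grows.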
Similar Articles
The effect of speech rate and noise on bilinguals' speech perception: the case of native speakers of Arabic in Israel
Listening conditions affect bilinguals’ speech perception, but relatively little is known about the effect of the combination of several degrading listening conditions. We studied the combined effect of speech rate and background noise on bilinguals’ speech perception in their L1 and L2. Speech perception of twenty Israeli university students, native speakers of Arabic (L1), with Hebrew as L2, ...
Reduction of non-native accents through statistical parametric articulatory synthesis.
This paper presents an articulatory synthesis method to transform utterances from a second language (L2) learner to appear as if they had been produced by the same speaker but with a native (L1) accent. The approach consists of building a probabilistic articulatory synthesizer (a mapping from articulators to acoustics) for the L2 speaker, then driving the model with articulatory gestures from a...
Data driven articulatory synthesis with deep neural networks
The conventional approach for data-driven articulatory synthesis consists of modeling the joint acoustic-articulatory distribution with a Gaussian mixture model (GMM), followed by a post-processing step that optimizes the resulting acoustic trajectories. This final step can significantly improve the accuracy of the GMM frame-by-frame mapping but is computationally intensive and requires that th...
Place assimilation and articulatory strategies: the case of sibilant sequences in French as L1 and L2
This study focuses on how French heterosyllabic sibilant clusters are produced by one French native speaker and by three Italian learners of French-L2. In French, these clusters are frequent and are reported to show place assimilation; in Italian, on the contrary, they are very rare and speakers are expected to repair the phonotactically marked sequences by epenthesizing a schwa. In the current...
Task-Induced Involvement in L2 Vocabulary Learning: A Case for Listening Comprehension
The study aimed at investigating whether the retention of vocabulary acquired incidentally is dependent upon the amount of task-induced involvement. Immediate and delayed retention of twenty unfamiliar words was examined in three learning tasks (listening comprehension + group discussion, listening comprehension + dictionary checking + summary writing in L1, and listening comprehension + dictio...